ACEEE Int. J. on Information Technology, Vol. 01, No. 02, Sep 2011
A Novel Optimum Technique for JPEG 2000
Post Compression Rate Distortion Algorithm
Shaik. Mahaboob Basha 1, Dr. B. C. Jinaga 2
1 Department of Electronics and Communication Engineering, Priyadarshini College of Engineering and Technology, Nellore, India
Email: mohisin7@yahoo.co.in
2 Retired Professor, Department of Electronics and Communication Engineering, J.N.T. University, Hyderabad, India
Email: jinagabc@yahoo.co.in
Abstract — The new technique proposed in this paper, based on the Hidden Markov Model and applied in the field of post compression rate distortion algorithms, meets the requirements of high quality still images. The existing technology has been extensively applied in modern image processing. Development of image compression algorithms is becoming increasingly important for obtaining a more informative image from several source images captured by different modes of imaging systems or multiple sensors. The JPEG 2000 image compression standard is very sensitive to errors. The JPEG 2000 system provides scalability with respect to quality, resolution and color component in the transfer of images, but some applications require high quality images at the output end. In our architecture a proto-object is introduced as the input, and bit rate allocation and rate distortion are discussed for the output image with high resolution. We also discuss our novel response dependent condensation image compression, which gave scope to pursue this Post Compression Rate Distortion (PCRD) algorithm of the JPEG 2000 standard. The proposed technique outperforms the existing methods in terms of efficiency and optimum PSNR values at different bpp levels. It employs the Hidden Markov Model to meet the requirements of higher scalability and to increase memory storage capacity.
Index Terms— JPEG 2000, Hidden Markov Model, Rate
distortion, Scalability, Image compression, post compression
I. Introduction
JPEG 2000 is a new digital imaging system that builds on JPEG but differs from it. It utilizes a wavelet transform and an arithmetic coding scheme to achieve scalability in its design and operation. It offers improved compression and better quality for a given file size under most circumstances, especially at very high compression ratios. As a result, greater emphasis is being placed on the design of new and efficient image coders for communication and transmission. Today, applications of image coding and compression have become very numerous. Many applications involve the real-time coding of image signals for use in mobile satellite communications, cellular telephony, and videophone or video teleconferencing systems. The recently developed post compression rate distortion algorithms for the JPEG 2000 standard, which incorporate wavelets at the core of their technique, provide many excellent features compared to other algorithms. From the time David Taubman introduced Post Compression Rate Distortion
©2011 ACEEE
DOI: 01.IJIT.01.02.557
Algorithms for JPEG 2000 in terms of scalability, many algorithms have found their way into various fields of application. The Discrete Wavelet Transform (DWT) is the most widely used transform technique in JPEG 2000 applications. These applications require improvements in scalability, efficiency, and memory storage capacity, so in this paper we apply the Hidden Markov Model technique to the JPEG 2000 still image compression standard. The outline of the paper is as follows. Section II covers the background of the proposed technique, including an overview of JPEG 2000 and the Response Dependent Condensation Image Compression Algorithm. Section III describes the methodology of the architecture. Section IV gives the simulation results of our work. Conclusions appear in Section V.
II. BACKGROUND
A. Overview of JPEG 2000
Superior low bit-rate performance was considered desirable for JPEG 2000 at all bit rates; in particular, improved performance at low bit-rates with respect to JPEG was considered an important requirement. Seamless compression of image components, each from 1 to 16 bits deep, was desired from one unified compression architecture. Progressive transmission is highly desirable when receiving imagery over slow communication links. Code-stream organizations which are progressive by pixel accuracy and quality improve the quality of decoded imagery as more data are received. Code-stream organizations which are progressive by "resolution" increase the resolution, or size, of the decoded imagery as more data are received [19]. Both lossless and lossy compression were desired, again from a single compression architecture, and it was desired to achieve lossless compression in the natural course of progressive decoding. JPEG 2000 also has other salient features such as random code-stream access and processing, robustness to bit-errors, and sequential build-up capability. The latter allows an image to be encoded from top to bottom in a sequential fashion without the need to buffer the entire image, which is very useful for low-memory implementations in scan-based systems [19]. The JPEG 2000 core coding system comprises sample data transformation, sample data coding, rate-distortion optimization, and code-stream reorganization stages. The first stage, sample data transformation, compacts the energy of the image through the Discrete Wavelet Transform (DWT) and sets
the range of image samples. Then, the image is logically partitioned into code-blocks that are independently coded by the sample data coding stage, also called Tier-1 [21].
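To illustrate the energy-compaction property of this first stage, the following sketch applies one level of a 2D Haar DWT to a smooth synthetic image. The Haar filters and the average/difference normalization here are a simplified stand-in for the 5/3 and 9/7 wavelets that JPEG 2000 actually specifies; the point is only that nearly all the energy lands in the low-frequency (LL) subband.

```python
import numpy as np

def haar_dwt2d(img):
    """One level of a 2D Haar DWT: returns (LL, LH, HL, HH) subbands.

    Averages act as the low-pass filter and half-differences as the
    high-pass filter, applied first along rows, then along columns.
    """
    a = img.astype(float)
    # transform rows: average -> low-pass, half-difference -> high-pass
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # transform columns of each half
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

# a smooth synthetic "image": most of the energy should land in LL
x, y = np.meshgrid(np.arange(8), np.arange(8))
img = 100 + x + y
LL, LH, HL, HH = haar_dwt2d(img)
total = sum(np.sum(b ** 2) for b in (LL, LH, HL, HH))
print(np.sum(LL ** 2) / total)  # very close to 1 for a smooth image
```

On real images the high-frequency subbands are not exactly this sparse, but the same concentration of energy into LL is what makes the subsequent code-block coding efficient.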
B. Response Dependent Condensation Algorithm
Our Response Dependent Condensation Image Compression Algorithm develops an array error packing and addressing methodology from the original image and the error image, and it depends on the application. The compression ratios of different transforms were observed. To calculate the compression ratio, a 2-dimensional 8x8 image was considered. The image is first converted into binary format and then processed; the output, also in binary format, is given to MATLAB to reconstruct the image. The simulation results using the hybrid transform were better than those of the other transformation techniques (DCT - Discrete Cosine Transform, DFT - Discrete Fourier Transform, DST - Discrete Sine Transform, DWT - Discrete Walsh Transform, DHT - Discrete Hartley Transform). Wavelet analysis is capable of revealing aspects of data that other signal analysis techniques such as Fourier analysis miss: trends, breakdown points, discontinuities in higher derivatives, and self-similarity [22].
The component transform provides de-correlation among image components (R, G, and B). This improves compression and allows for visually relevant quantization. When the reversible path is used, the Reversible Component Transform (RCT), which maps integers to integers, is used. When the irreversible path is used, the YCbCr transform is used, as is common with the original JPEG [22]. The
dynamic condensation matrix is response dependent, and the corresponding condensation is referred to as Response-Dependent Condensation. The dynamic condensation matrix is defined by the relations of an eigenvector between the input and output. This is a novel approach to studying linear effects in JPEG 2000 compression of color images. The DCT was performed on the benchmark figures, and each element in each block of the image was then quantized using a quantization matrix of quality level 50. At this point many of the elements become zeroed out, and the image takes up much less space to store. The image can then be decompressed using the proposed algorithm. At quality level 50 there is almost no visible loss in the image, yet the compression is high. At lower quality levels, the quality drops considerably while the compression does not increase very much.
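The quality-level-50 quantization step described above can be sketched as follows. The quantization matrix is the standard JPEG luminance table for quality 50; the 8x8 input block is a made-up smooth ramp rather than one of the benchmark images, and the naive DCT is written out directly for clarity rather than speed.

```python
import numpy as np
from math import cos, pi, sqrt

# Standard JPEG luminance quantization matrix (quality level 50).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def dct2(block):
    """Naive 8x8 2D DCT-II (O(N^4); fine for a demonstration)."""
    N = 8
    out = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            cu = 1 / sqrt(2) if u == 0 else 1.0
            cv = 1 / sqrt(2) if v == 0 else 1.0
            s = sum(block[i, j]
                    * cos((2 * i + 1) * u * pi / (2 * N))
                    * cos((2 * j + 1) * v * pi / (2 * N))
                    for i in range(N) for j in range(N))
            out[u, v] = 0.25 * cu * cv * s
    return out

# A smooth 8x8 block (level-shifted toward zero, as in JPEG).
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
block = 2 * i + 3 * j - 20.0
coeffs = dct2(block)
quantized = np.round(coeffs / Q50).astype(int)
print(np.count_nonzero(quantized))  # only a few low-frequency coefficients survive
```

Dividing by the larger high-frequency entries of Q50 and rounding is precisely what zeroes out most elements, which is why the quantized block stores so compactly.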
Similar experiments were conducted on various images to find the achievable compression. The Response Dependent Compression Algorithm is applied to calculate the image compression; it can be observed that noise is slightly removed, but there is a large change in image dimensions. This response dependent condensation algorithm gives better results than the other transformation techniques. The algorithm gives an idea and scope to move further toward proto-object segmentation with reference to scalability, by replacing the basic Discrete Wavelet Transform (DWT) with
other techniques, including the Hidden Markov Model (HMM) presented here, for certain applications involving still images.
III. METHODOLOGY OF PROPOSED ARCHITECTURE
A. Hidden Markov Model
The Hidden Markov Model (HMM) is the technique used in our proposed work, as it is well suited to image processing tasks, mainly segmentation and compression. The standard formula for estimating the model according to the rate distortion and bit rate allocation can be derived from our architecture. To segment an image, the bit rate allocation, in terms of pixels of the proto-image given as the input, is handled easily with an HMM. The problems in our work can be handled by an HMM because it supports smoothing and statistical significance testing. The statistical significance is the probability that a sequence drawn from some null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability at least as large as that of a particular output sequence. If an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence [25], the statistical significance indicates the false positive rate associated with accepting the hypothesis for that output sequence.
Figure 1. Architecture of Hidden Markov Model
The task is to compute, given the parameters of the model and a particular output sequence up to time t, the probability distribution over hidden states for a point in time k in the past:

P(x(k) | y(1), ..., y(t)) for k < t.    (1)

The forward-backward algorithm is an efficient method for computing these smoothed values for all hidden state variables. A Hidden Markov Model can represent even more complex behavior when the output of the states is represented as a mixture of two or more Gaussians, in which case the probability of generating an observation is the product of the probability of first selecting one of the Gaussians and the probability of generating that observation from that Gaussian.
This is the reason for choosing the Hidden Markov Model for our proposed work. The block diagram of our proposed architecture, described in this section, is given with reference to the paper mentioned in [23].
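The smoothing computation above can be sketched with a toy discrete-output model; the two-state transition, emission, and initial probabilities below are illustrative values only, not parameters estimated from image data.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Smoothed posteriors P(x(k) | y(1),...,y(t)) for every k <= t.

    A[i, j]: transition probability i -> j; B[i, o]: probability of
    emitting symbol o in state i; pi: initial state distribution;
    obs: list of observed symbol indices.
    """
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))            # forward messages
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta = np.ones((T, N))              # backward messages
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                # unnormalized posteriors
    return gamma / gamma.sum(axis=1, keepdims=True)

# Hypothetical two-state model emitting one of two symbols.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
post = forward_backward(A, B, pi, [0, 0, 1, 0])
print(post[0])  # posterior over states at k = 1 given all four observations
```

The forward pass alone gives filtered estimates; multiplying in the backward messages is what turns them into the smoothed distribution of equation (1).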
B. Scalability
The high buffering cost of embedded compression is unavoidable so long as we insist on generating the embedded bit-stream in order. An alternate approach, however, is to process the image or subband samples locally while
producing the embedded bit-stream; the intermediate storage required prior to a final reorganization step can then be significantly smaller than the image itself, assuming that compression is achieved for constructing embedded bit-streams [19].
C. PCRD Coding and Pre-coding stages
The general PCRD algorithm derived by David Taubman and Marcellin may be used to optimize the set of code-block truncation points, {z}, so as to minimize the overall distortion, D, subject to an overall length constraint, L. The same algorithm may be used to minimize the overall length subject to a distortion constraint if desired. We refer to this optimization strategy as post-compression rate-distortion optimization (PCRD-opt). The algorithm is implemented by the compressor, which is expected to compute or estimate the length and distortion contributions, L(z) and D(z), for each truncation point, z = 0, 1, ..., Z. This information will not normally be explicitly included in the pack-stream. As a result, the algorithm is not easily reapplied to a previously constructed pack-stream, which may contain many quality layers [19]. The input image is partitioned into PO (proto-object) regions and BG (background) regions; we then reconsider both the construction of an operational RD curve in the coding pipeline and the implementation of an efficient rate control scheme in terms of PO regions. By using PO region segmentation instead of tile partitioning, the quality layers are defined in terms of PO regions.
By assuming that the overall distortion is additive,

D = Σ_i D_i(n_i),
it is desired to find the optimal selection of bit-stream truncation points n_i such that the overall distortion metric is minimized subject to a rate constraint [24].
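A minimal sketch of this truncation-point selection follows, under the assumption that each code-block supplies cumulative (length, distortion) pairs at its candidate truncation points. It follows the spirit of PCRD-opt, sweeping convex-hull distortion-rate slopes in decreasing order, but ignores details such as coding-pass ordering and quality-layer formation; the two example blocks are made up.

```python
def pcrd_opt(blocks, budget):
    """Pick one truncation point per code-block to minimize total
    distortion subject to a total length budget.

    Each block is a list of (length, distortion) pairs, cumulative and
    ordered by truncation point; index 0 means "discard the block".
    """
    # For every block, keep only truncation points with strictly
    # decreasing distortion-rate slope (the convex hull of its RD curve).
    candidates = []  # (slope, block index, truncation index)
    for b, pts in enumerate(blocks):
        last_slope = float("inf")
        for z in range(1, len(pts)):
            dl = pts[z][0] - pts[z - 1][0]           # extra bytes
            dd = pts[z - 1][1] - pts[z][1]           # distortion removed
            slope = dd / dl if dl > 0 else float("inf")
            if slope < last_slope:
                candidates.append((slope, b, z))
                last_slope = slope
    # Accept increments in order of steepest distortion reduction per byte.
    candidates.sort(reverse=True)
    chosen = [0] * len(blocks)
    spent = 0
    for slope, b, z in candidates:
        cost = blocks[b][z][0] - blocks[b][chosen[b]][0]
        if z == chosen[b] + 1 and spent + cost <= budget:
            chosen[b] = z
            spent += cost
    return chosen, spent

# Two hypothetical code-blocks with cumulative (bytes, distortion).
blocks = [[(0, 100), (10, 40), (25, 20)],
          [(0, 80), (8, 50), (20, 10)]]
chosen, spent = pcrd_opt(blocks, budget=30)
print(chosen, spent)  # [1, 2] 30
```

Because all candidate increments are pooled and consumed by slope, the budget is always spent where it buys the largest distortion reduction, which is exactly the optimality argument behind PCRD-opt.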
Figure 2. Block diagram of the Proposed Architecture
R = Σ_i R_i(n_i) ≤ R_T.    (2)
These constraints are reflected in the partition system and coding pipeline of the JPEG2000 system [23]. In the coding stage the operational RD curve is constructed in two steps: 1) Tier-1 outputs code-stream segments with a set of truncation points for coding passes; the code-stream segment is the smallest unit for constructing the operational RD curve. 2) Quality layers of PO regions are developed in Tier-2, and this forms the final operational curve for the further purpose of rate control [23]. Similarly, in the post-coding stage, by using the actual RD functions of all the compressed data, the optimal truncation techniques attain the minimum image distortion for a given bit rate. Our rate control scheme is based on the estimation of RD slopes of the coding passes. Using these estimations, the selection of coding passes to yield a target bit rate can be performed without information related to the encoding process or distortion measures based on the original image [23]. Quality scalability is achieved by dividing the wavelet transformed image into code-blocks. After each code-block is encoded, a post-processing operation determines where each code-block's embedded stream should be truncated in order to achieve a pre-defined bit-rate or distortion bound for the whole image. This bit-stream rescheduling module is referred to as Tier-2. It establishes a multi-layered representation of the final bit-stream, guaranteeing an optimal performance at several bit rates or resolutions [24].
D. EBCOT Block
The coding and ordering techniques adopted by JPEG2000 are based on the concept of Embedded Block Coding with Optimal Truncation (EBCOT). Each subband is partitioned into relatively small code-blocks having the same nominal dimensions in every subband; because subband sizes differ, the code-blocks at subband boundaries may appear to have different sizes. Each code-block, B_i, is coded independently, producing an elementary embedded bit-stream, C_i. Any prefix of this bit-stream, of length L_i, should represent an efficient compression of the block's samples at the corresponding rate.
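The code-block partition described above can be sketched as follows; the 64x64 nominal size is a common JPEG 2000 choice, and the example subband dimensions are arbitrary.

```python
def partition_codeblocks(sb_h, sb_w, cb_h=64, cb_w=64):
    """Partition a subband of size sb_h x sb_w into code-blocks.

    Nominal code-block dimensions are the same in every subband;
    blocks on the right and bottom edges may come out smaller,
    which is why code-blocks can appear to have different sizes.
    Returns (row offset, column offset, height, width) tuples.
    """
    blocks = []
    for y0 in range(0, sb_h, cb_h):
        for x0 in range(0, sb_w, cb_w):
            h = min(cb_h, sb_h - y0)
            w = min(cb_w, sb_w - x0)
            blocks.append((y0, x0, h, w))
    return blocks

# A 100x150 subband splits into a 2x3 grid of code-blocks.
blocks = partition_codeblocks(100, 150)
print(len(blocks), blocks[-1])  # 6 (64, 128, 36, 22)
```

Each returned rectangle is then coded independently into its own embedded bit-stream, which is what makes per-block truncation by PCRD-opt possible.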
E. Quantization
The trade-off between rate and distortion is obtained by quantization. Wavelet coefficients can be divided by a different step size for each subband. Alternatively, portions of the coded data can be discarded, and this discarding can be done in a variety of creative ways. The proposed technique was implemented on several benchmark figures such as Lena, River, House, and Animal, and also on several sample figures including color and black-and-white images; the results obtained were comparably better.
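The per-subband division described above corresponds to a dead-zone scalar quantizer of the kind JPEG 2000 uses; the step size and the sample coefficients below are illustrative values.

```python
import numpy as np

def deadzone_quantize(coeffs, step):
    """Dead-zone scalar quantizer: q = sign(c) * floor(|c| / step).

    Coefficients with |c| < step fall into the dead zone and quantize
    to zero; each subband may use a different step size.
    """
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def dequantize(q, step):
    """Midpoint reconstruction of the quantization interval."""
    return np.sign(q) * (np.abs(q) + 0.5) * step * (q != 0)

coeffs = np.array([-7.2, -0.4, 0.0, 0.9, 3.1, 12.6])
q = deadzone_quantize(coeffs, step=2.0)
print(q)                    # [-3. -0.  0.  0.  1.  6.]
print(dequantize(q, 2.0))   # small coefficients are discarded entirely
```

Choosing a larger step for high-frequency subbands discards exactly the coefficients the eye misses least, which is how the rate-distortion trade-off is steered per subband.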
Fig. 3. USID benchmark (Lena)
Table I. bpp versus computation times
Fig. 4. USID benchmark (Animal)
Fig. 5. USID benchmark (River)
Fig. 6. USID benchmark (House)
IV. SIMULATION RESULTS
MATLAB was used as the simulation tool to demonstrate improved results over the existing algorithms on benchmark figures such as Lena, River, House, and Animal, and also on several sample figures including color and black-and-white images.
Sample No    bpp     Computation time (s)
1            0.25    0.782
2            0.5     0.835
3            0.75    0.915
4            1.0     0.925
5            1.5     0.945
Figure 8. Comparison of performance
Table II. bpp versus PSNR

Sample No    bpp     PSNR (dB)
1            0.25    26
2            0.5     28.5
3            0.75    30.25
4            1.0     31.15
5            1.5     33.75
Figure 7. Comparison of bpp versus time
Figure 9. Comparison of bpp versus PSNR
The experimental values in Tables I, II and III clearly show that the technique proposed in this paper achieves better computation times and PSNR values at various bpp values. The complexity has been reduced by applying the Hidden Markov Model in place of the other wavelet
ACEEE Int. J. on Information Technology, Vol. 01, No. 02, Sep 201 1
transforms like DWT. In the post compression process the
rate distortion and bit rate allocation will generally play a
major role in various application requirements. This technique
can also utilized to perform the object region coding and
segmentation processing of different types of application
images for different applications.
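For reference, the PSNR figures in Tables II and III are read on the standard decibel scale below; a minimal sketch with a made-up uniform-error image pair:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images.

    PSNR = 10 * log10(peak^2 / MSE); higher is better.
    """
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100.0)
b = a + 4.0                  # uniform error of 4 gray levels -> MSE = 16
print(round(psnr(a, b), 2))  # 36.09
```

An improvement of a few dB at the same bpp, as in the tables, therefore reflects a substantial reduction in mean squared error.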
Table III. bpp versus PSNR

Sample No    bpp     PSNR (dB)
1            0.25    26.25
2            0.5     30.25
3            0.75    33.15
4            1.0     34.675
5            1.5     37.515
This is relevant to mobile, remote sensing, and other still image compression applications. JPEG image compression systems can be affected by soft errors because of their wide use in remote sensing and medical imaging. In such applications, fault tolerance techniques are very important for detecting computer-induced errors within the JPEG compression system, thus guaranteeing the quality of the image output while
employing the compression data format of the standard. Fault
tolerance is different from error resilience in compression
standards. Resilience generally refers to the capabilities of
recovering from errors introduced by the transmission path
of compressed data while fault tolerance protects against
errors that are introduced by the computing resources
executing the compressing and decompressing algorithms.
Scalability is an important concept of JPEG2000 still image
compression standard. The JPEG2000 codec is transform
based, and resolution scalability is a direct consequence of
the multi-resolution properties of the Discrete Wavelet
Transform (DWT). A code stream is said to be resolution
scalable if it contains identifiable subsets that represent
successively lower resolution versions of the original image.
Since bi-level images are invariably digitized at high
resolutions, this property of the code-stream is potentially
very useful. Consider the case where high resolution images
are being viewed by a user over a network. Typically, the
image at full resolution will be too large to display on the
user's monitor. By making use of the inherent scalability of
the JPEG2000 code stream, it is possible to stream only the
relevant portions of the image to the client. This allows
JPEG2000 content to be delivered in a manner which matches
the user's display resolution. [26] [27] In JPEG2000, both the
reversible and irreversible transforms can be implemented
using a common lifting framework. In a broad sense, lifting
provides a means to generate invertible mappings between
sequences of numbers, and the invertibility is unaffected
even when arbitrary operators, which may be linear or non-
linear, are introduced in the lifting steps [26] [27] .This flexibility
allows the use of non-linear rounding operations in the lifting
steps, in order to ensure that the transform coefficients are
integers. In this paper we have discussed the necessity of enhanced scalability for various applications. For better results, the images were tested at different resolutions such as 512x512 and 256x256.
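The lifting framework with non-linear rounding can be sketched with the reversible LeGall 5/3 transform on a 1-D signal. The boundary handling below (simple symmetric clamping at the edges) is an assumption of this sketch; the point is that the floor rounding inside each lifting step does not break invertibility.

```python
def lift_53_forward(x):
    """Reversible LeGall 5/3 lifting on a 1-D even-length integer signal.

    Predict: d[i] = x[2i+1] - floor((x[2i] + x[2i+2]) / 2)
    Update:  s[i] = x[2i]   + floor((d[i-1] + d[i] + 2) / 4)
    The floor rounding is the non-linear operation that keeps the
    mapping integer-to-integer without losing invertibility.
    """
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    d = [odd[i] - (even[i] + even[min(i + 1, n - 1)]) // 2 for i in range(n)]
    s = [even[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n)]
    return s, d

def lift_53_inverse(s, d):
    """Undo the lifting steps in reverse order: each step only reads
    values that are still available, so it subtracts off exactly."""
    n = len(d)
    even = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n)]
    odd = [d[i] + (even[i] + even[min(i + 1, n - 1)]) // 2 for i in range(n)]
    x = [0] * (2 * n)
    x[0::2], x[1::2] = even, odd
    return x

x = [10, 12, 14, 13, 11, 9, 8, 8]
s, d = lift_53_forward(x)
print(lift_53_inverse(s, d) == x)  # perfect reconstruction: True
```

Inversion works because the update step reads only the detail signal d and the predict step reads only the even samples, so each can be subtracted back off exactly even with rounding inside.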
Conclusions
This paper proposes a new technique in the field of post compression rate distortion algorithms for JPEG 2000, based on the Hidden Markov Model, which outperforms the existing methods in terms of scalability. It also showed better results in PSNR versus bpp, and the computational complexity has been reduced considerably, which increases efficiency and memory storage capacity.
Acknowledgment
The first author would like to thank Dr. I. Gopal Reddy, Director, Dr. O. Mahesh, Principal, and the management committee of Priyadarshini College of Engineering & Technology, Nellore, for their encouragement in doing this work, and is also very grateful to the authors cited in the references.
References
[1] J. Kliewer, A. Huebner, and D. J. Costello, Jr., "On the achievable extrinsic information of inner decoders in serial concatenation," in Proc. IEEE International Symp. Inform. Theory, Seattle, WA, July 2006.
[2] S. ten Brink, "Code characteristic matching for iterative decoding of serially concatenated codes," Annals Telecommun., vol. 56, no. 7-8, pp. 394-408, July-Aug. 2001.
[3] F. Brannstrom, L. Rasmussen, and A. Grant, "Optimal puncturing ratios and energy distribution for multiple parallel concatenated codes," IEEE Trans. Inform. Theory, vol. 55, no. 5, pp. 2062-2077, May 2009.
[4] R. Thobaben, "EXIT functions for randomly punctured systematic codes," in Proc. IEEE Inform. Theory Workshop (ITW), Lake Tahoe, CA, USA, Sept. 2007.
[5] V. Mannoni, P. Siohan, and M. Jeanne, "A simple on-line turbo estimation of source statistics for joint-source channel LC decoders: application to MPEG-4 video," in Proc. 2006 International Symp. Turbo Codes, Munich, Germany, Apr. 2006.
[6] C. Guillemot and P. Siohan, "Joint source-channel decoding of variable length codes with soft information: a survey," EURASIP J. Applied Signal Processing, no. 6, pp. 906-927, June 2005.
[7] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inform. Theory, vol. 42, no. 2, pp. 429-445, Mar. 1996.
[8] A. Ashikhmin, G. Kramer, and S. ten Brink, "Extrinsic information transfer functions: model and erasure channel properties," IEEE Trans. Inform. Theory, vol. 50, no. 11, pp. 2657-2673, Nov. 2004.
[9] J. Hagenauer and N. Goertz, "The turbo principle in joint source-channel coding," in Proc. IEEE Inform. Theory Workshop (ITW), Paris, France, Apr. 2003, pp. 275-278.
[10] H. Farid, "Exposing digital forgeries from JPEG ghosts," IEEE Transactions on Information Forensics and Security, vol. 4, no. 1, pp. 154-160, 2009.
[11] D. Galleger, Y.-Q. Shi, and W. Su, "A generalized Benford's law for JPEG coefficients and its applications in image forensics," in Proc. SPIE Security, Steganography, and Watermarking of Multimedia Contents, 2007, vol. 6505, p. 58.
[12] J. He, Z. Lin, L. Wang, and X. Tang, "Detecting doctored JPEG images via DCT coefficient analysis," in ECCV'06, 2006, vol. III, pp. 423-435.
[13] J. Kovacevic and W. Sweldens, "Wavelet families of increasing order in arbitrary dimensions," IEEE Trans. Image Process., vol. 9, no. 3, pp. 480-496, Mar. 2000.
[14] H. A. M. Heijmans and J. Goutsias, "Nonlinear multiresolution signal decomposition schemes - part II: Morphological wavelets," IEEE Trans. Image Process., vol. 9, no. 11, pp. 1897-1913, Nov. 2000.
[15] E. J. Candes and D. L. Donoho, "Curvelets - A surprisingly effective nonadaptive representation for objects with edges," in Curve and Surface Fitting, A. Cohen, C. Rabut, and L. L. Schumaker, Eds. Nashville, TN: Vanderbilt Univ. Press, 2000, pp. 105-120.
[16] E. Le Pennec and S. Mallat, "Image compression with geometrical wavelets," in Proc. IEEE Int. Conf. Image Processing, Vancouver, BC, Canada, Sep. 2000, vol. 1, pp. 661-664.
[17] M. N. Do and M. Vetterli, "Contourlets: a directional multiresolution image representation," in Proc. IEEE Int. Conf. Image Processing, Rochester, NY, Sep. 2002, vol. 1, pp. I-357-I-360.
[18] R. Claypoole, R. Baraniuk, and R. Nowak, "Adaptive wavelet transforms via lifting," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, May 1998, vol. 3, pp. 1513-1516.
[19] D. Taubman, M. Marcellin, and M. Rabbani, "JPEG2000: Image compression fundamentals, standards and practice," vol. 11, pp. 286-287, 2002.
[20] C. Ben Amar and O. Jemai, "Wavelet networks approach for image compression," Research Group on Intelligent Machines (REGIM), University of Sfax, National Engineering School of Sfax, B.P.W, 3038, Sfax, Tunisia.
[21] F. Auli-Llinas, J. Serra-Sagrista, and J. Bartrina-Rapesta, "Enhanced JPEG2000 quality scalability through block-wise truncation," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 803542, 11 pages, 2010.
[22] S. Mahaboob Basha and B. C. Jinaga, "A novel response dependent image compression algorithm to reduce the nonlinear effects in color images using JPEG," in Proc. IEEE International Conference SIBIRCON 2010 on Computational Technologies in Electrical and Electronics Engineering, Russia, vol. 2, July 2010.
[23] J. Xue, C. Li, and N. Zheng, "Proto-object based rate control for JPEG2000: An approach to content-based scalability," IEEE Transactions on Image Processing, vol. 20, no. 4, April 2011.
[24] V. Jayaram, B. E. Usevitch, and O. M. Kosheleva, "Detection from hyperspectral images compressed using rate distortion and optimization techniques under JPEG2000 Part 2."
[25] www.wikipedia.org
[26] D. S. Taubman and M. W. Marcellin, "JPEG2000: Standard for interactive imaging," Proceedings of the IEEE, vol. 90, no. 8, pp. 1336-1357, 2002.
[27] R. Raguram, M. W. Marcellin, and A. Bilgin, "Improved resolution scalability for bi-level image data in JPEG2000," in Proc. Data Compression Conference (DCC'07), 2007.